An Accelerated First-Order Method for Solving Unconstrained Polynomial Optimization Problems
Authors
Abstract
Our interest lies in solving large-scale unconstrained polynomial optimization problems. Because interior-point methods for solving these problems are severely limited by their scale, we are motivated to explore efficient implementations of an accelerated first-order method for this class of problems. By exploiting special structural properties of this problem class, we greatly reduce the computational cost of the first-order method at each iteration. We report promising computational results, as well as a curious observation about the behavior of the first-order method for unconstrained polynomial optimization.
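The abstract does not spell out the accelerated scheme itself. As a point of reference only, a minimal sketch of a generic Nesterov-type accelerated gradient iteration for a smooth objective is given below; the gradient grad_f, the Lipschitz constant L, and the stopping rule are illustrative assumptions, not details taken from the paper, which additionally exploits problem structure to cheapen each iteration.

import numpy as np

def accelerated_gradient(grad_f, x0, L, max_iter=500, tol=1e-8):
    # Generic Nesterov/FISTA-style accelerated gradient method for a smooth
    # objective with L-Lipschitz gradient.  Illustrative sketch only; not the
    # structured implementation described in the paper.
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    t = 1.0
    for _ in range(max_iter):
        x_new = y - grad_f(y) / L                        # gradient step at the extrapolated point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
        if np.linalg.norm(x_new - x) < tol:              # stop when iterates stall
            return x_new
        x, t = x_new, t_new
    return x

# Example on a simple quadratic, f(x) = 0.5 * ||A x - b||^2 (hypothetical data):
# A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 0.0])
# grad = lambda x: A.T @ (A @ x - b)
# x_star = accelerated_gradient(grad, np.zeros(2), L=np.linalg.norm(A.T @ A, 2))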
Similar resources
An accelerated first-order method for solving SOS relaxations of unconstrained polynomial optimization problems
Dimitris Bertsimas, Robert M. Freund & Xu Andy Sun (2013). An accelerated first-order method for solving SOS relaxations of unconstrained polynomial optimization problems. Optimization Methods and Software, 28(3)...
An Accelerated First-Order Method for Solving Unconstrained SOS Polynomial Optimization Problems
Our interest lies in solving large-scale unconstrained SOS (sum of squares) polynomial optimization problems. Because interior-point methods for solving these problems are severely limited by their scale, we are motivated to explore efficient implementations of an accelerated first-order method to solve this class of problems. By exploiting special structural properties of this problem clas...
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
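For context on this snippet, the standard secant relation it refers to requires the updated Hessian approximation to map the most recent step onto the corresponding gradient difference. Stated generically (this is the textbook condition, not the snippet's specific double-parameter scaled formula):

\[
B_{k+1}\, s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
\]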
A limited memory adaptive trust-region approach for large-scale unconstrained optimization
This study is concerned with a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique reduces the number of subproblems solved, while utilizing the structure of limited memory quasi-Newt...
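As general background for this snippet, a trust-region method at the iterate x_k computes a trial step by solving a quadratic subproblem of the standard form below; the compact limited-memory BFGS matrix B_k and the adaptive rule for choosing the radius Δ_k are specific to the cited approach and are not reproduced here:

\[
\min_{p \in \mathbb{R}^n} \; m_k(p) = f(x_k) + \nabla f(x_k)^{T} p + \tfrac{1}{2}\, p^{T} B_k\, p
\quad \text{subject to} \quad \|p\| \le \Delta_k .
\]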
Solving the Unconstrained Optimization Problems Using the Combination of Nonmonotone Trust Region Algorithm and Filter Technique
In this paper, we propose a new nonmonotone adaptive trust-region method, equipped with the filter technique, for solving unconstrained optimization problems. In the proposed method, a nonmonotone technique is used; with it, the algorithm can benefit from nonmonotone properties and solve problems more quickly. Also, the filter that is used in...
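One common instance of a nonmonotone acceptance test, given here only to illustrate the idea named in this snippet (not necessarily the exact variant that paper uses), replaces the current function value with the worst of the last few values when measuring actual versus predicted reduction:

\[
\rho_k = \frac{\displaystyle \max_{0 \le j \le m(k)} f(x_{k-j}) \; - \; f(x_k + p_k)}{m_k(0) - m_k(p_k)},
\]

so a trial step can be accepted even when it temporarily increases the objective.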
Journal:
Volume / Issue:
Pages:
Publication year: 2011